    Compressive sensing based image processing and energy-efficient hardware implementation with application to MRI and JPEG 2000

    Get PDF
    In the present age of technology, the buzzwords are low-power, energy-efficient and compact systems. This directly concerns the data processing and hardware techniques employed at the core of these devices. One of the most power-hungry and space-consuming schemes is image/video processing, due to its high quality requirements. In current design methodologies, a point has nearly been reached at which physical and physiological effects limit the ability to simply encode data faster. These limits have led to research into methods of reducing the amount of acquired data without degrading image quality or increasing energy consumption. Compressive sensing (CS) has emerged as an efficient signal compression and recovery technique that can be used to reduce data acquisition and processing. It exploits the sparsity of a signal in a transform domain to perform sampling and stable recovery. This is an alternative paradigm to conventional data processing and is robust in nature. Unlike conventional methods, CS provides an information-capturing paradigm that combines sampling and compression. It permits signals to be sampled below the Nyquist rate while still allowing optimal reconstruction of the signal. The required measurements are far fewer than those of conventional methods, and the process is non-adaptive, making the sampling process faster and universal. In this thesis, CS methods are applied to magnetic resonance imaging (MRI) and JPEG 2000, which are widely used in clinical imaging and image compression, respectively. Over the years, MRI has improved dramatically in both imaging quality and speed, which has further revolutionized the field of diagnostic medicine. However, imaging speed, which is essential to many MRI applications, still remains a major challenge. The specific challenge addressed in this work is the use of non-Fourier, complex measurement-based data acquisition. This method provides the possibility of reconstructing high-quality MRI data from minimal measurements, due to the high incoherence between the two chosen matrices. Similarly, JPEG 2000, though providing high compression, can be further improved by using compressive sampling, which also improves image quality. Moreover, an optimized JPEG 2000 architecture reduces the overall processing and yields faster computation when combined with CS. Considering these requirements, this thesis is presented in two parts. In the first part: (1) a complex Hadamard matrix (CHM) based 2D and 3D MRI data acquisition scheme with recovery using a greedy algorithm is proposed; the CHM measurement matrix is shown to satisfy the necessary condition for CS, known as the restricted isometry property (RIP), and the sparse recovery is done using compressive sampling matching pursuit (CoSaMP); (2) an optimized matrix and a modified CoSaMP are presented, which enhance MRI performance when compared with conventional sampling; (3) an energy-efficient, cost-efficient hardware design based on a field programmable gate array (FPGA) is proposed, to provide a platform for low-cost MRI processing hardware. At every stage, the design is proven to be superior to other commonly used MRI-CS methods and comparable with conventional MRI sampling. In the second part, CS techniques are applied to image processing and combined with the JPEG 2000 coder.
    While CS can reduce the encoding time, the effect on the overall JPEG 2000 encoder is not very significant due to some complex JPEG 2000 algorithms. One bottleneck encountered is JPEG 2000 arithmetic encoding (AE), which is completely based on bit-level operations. In this work, this problem is tackled by proposing a two-symbol AE with an efficient FPGA-based hardware design. Furthermore, this design is energy-efficient, fast and has lower complexity when compared to conventional JPEG 2000 encoding.
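
    Since CoSaMP is a standard greedy sparse-recovery algorithm, a minimal sketch may help clarify how measurement and reconstruction fit together. The following Python/NumPy code is illustrative only: the random ±1 measurement matrix, signal sizes and iteration counts are assumptions for a toy example, not the thesis's complex Hadamard matrix or MRI data.

    import numpy as np

    def cosamp(Phi, y, k, n_iter=20, tol=1e-6):
        """Recover a k-sparse x from y ≈ Phi @ x (CoSaMP, Needell & Tropp)."""
        m, n = Phi.shape
        x = np.zeros(n, dtype=Phi.dtype)
        for _ in range(n_iter):
            r = y - Phi @ x                               # current residual
            if np.linalg.norm(r) < tol:
                break
            proxy = Phi.conj().T @ r                      # signal proxy
            omega = np.argsort(np.abs(proxy))[-2 * k:]    # 2k largest correlations
            support = np.union1d(omega, np.flatnonzero(x))
            b, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            idx = np.argsort(np.abs(b))[-k:]              # prune to the k largest entries
            x = np.zeros(n, dtype=Phi.dtype)
            x[support[idx]] = b[idx]
        return x

    # Toy demo with a random +/-1 ("Hadamard-like") measurement matrix, purely as an assumption.
    rng = np.random.default_rng(0)
    n, m, k = 256, 96, 8
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
    x_hat = cosamp(Phi, Phi @ x_true, k)
    print(np.linalg.norm(x_hat - x_true))                 # near zero for exactly sparse signals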

    An FPGA-based fast two-symbol processing architecture for JPEG 2000 arithmetic coding

    Get PDF
    In this paper, a field-programmable gate array (FPGA) based enhanced architecture of the arithmetic coder is proposed, which processes two symbols per clock cycle, compared to the conventional architecture that processes only one symbol per clock. The input to the arithmetic coder comes from the bit-plane coder, which generates more than two context-decision pairs per clock cycle, but due to the slow processing speed of the arithmetic coder, the overall encoding becomes slow. Hence, to overcome this bottleneck and speed up the process, a two-symbol architecture is proposed, which not only doubles the throughput but can also be operated at frequencies greater than 100 MHz. This architecture achieves a throughput of 210 Msymbols/sec with a critical path of 9.457 ns.
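
    As a quick sanity check of the reported figures, a critical path of 9.457 ns corresponds to a maximum clock of roughly 105 MHz, and at two symbols per cycle that gives roughly 210 Msymbols/sec. The short Python snippet below just reproduces this arithmetic; it is not part of the hardware design.

    critical_path_ns = 9.457
    f_max_mhz = 1e3 / critical_path_ns        # ≈ 105.7 MHz maximum clock frequency
    symbols_per_cycle = 2                     # two context-decision pairs per clock
    throughput_msps = f_max_mhz * symbols_per_cycle
    print(f"f_max ≈ {f_max_mhz:.1f} MHz, throughput ≈ {throughput_msps:.0f} Msymbols/sec")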

    A novel image compressive sensing method based on complex measurements

    No full text
    Compressive sensing (CS) has emerged as an efficient signal compression and recovery technique that exploits the sparsity of a signal in a transform domain to perform sampling and stable recovery. Existing image compression methods involve complex coding techniques and are also vulnerable to errors. In this paper, we propose a novel image compression and recovery scheme based on compressive sensing principles. This is an alternative paradigm to conventional image coding and is robust in nature. A discrete wavelet transform is used to obtain a sparse representation of the input, and a random complex Hadamard transform is used to obtain the CS measurements. At the decoder, sparse reconstruction is carried out using the compressive sampling matching pursuit (CoSaMP) algorithm. We show that the proposed CS method for image sampling and reconstruction is efficient in terms of complexity and quality, and is comparable with some of the existing CS techniques. We also demonstrate that our method uses considerably fewer random measurements.
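
    The encoder side of this pipeline (sparsify with a wavelet transform, then take a reduced set of complex random measurements) can be sketched in a few lines. The Python code below is a simplified 1-D analogue for illustration only: it uses a generic complex Gaussian measurement matrix and a 4x undersampling ratio as assumptions in place of the paper's random complex Hadamard transform, and reconstruction would use a CoSaMP routine such as the one sketched earlier.

    import numpy as np
    import pywt

    signal = np.cos(2 * np.pi * 5 * np.linspace(0, 1, 256))   # stand-in for one image row
    coeffs = pywt.wavedec(signal, "db4", level=4)              # sparsifying wavelet transform
    x, slices = pywt.coeffs_to_array(coeffs)                   # flatten the coefficients

    m = len(x) // 4                                            # 4x undersampling (assumption)
    rng = np.random.default_rng(1)
    Phi = (rng.standard_normal((m, len(x)))
           + 1j * rng.standard_normal((m, len(x)))) / np.sqrt(2 * m)
    y = Phi @ x                                                # m complex CS measurements
    print(len(x), "coefficients compressed to", len(y), "measurements")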

    Efficient synthesis of hydroxystyrenes via biocatalytic decarboxylation/deacetylation of substituted cinnamic acids by newly isolated Pantoea agglomerans strains

    No full text
    BACKGROUND: Decarboxylation of substituted cinnamic acids is a predominantly followed pathway for obtaining hydroxystyrenes – one of the most extensively explored bioactive compounds in the food and flavor industry (e.g. FEMA GRAS approved 4-vinylguaiacol). For this, mild and green strategies providing good yields with high product selectivity are needed. RESULTS: Two newly isolated bacterial strains, i.e. Pantoea agglomerans KJLPB4 and P. agglomerans KJPB2, are reported for mild and effective decarboxylation of substituted cinnamic acids into corresponding hydroxystyrenes. Key operational parameters for the process, such as incubation temperature, incubation time, substrate concentration and effect of co-solvent, were optimized using ferulic acid as a model substrate. With strain KJLPB4, 1.51 g L−1 4-vinylguaiacol (98% yield) was selectively obtained from 2 g L−1 ferulic acid at 28 °C after 48 h incubation. However, KJPB2 provided vanillic acid in 85% yield after 72 h, following the oxidative decarboxylation pathway. In addition, KJLPB4 was effectively exploited for the deacetylation of acetylated α-phenylcinnamic acids, providing corresponding compounds in 65–95% yields. CONCLUSION: Two newly isolated microbial strains are reported for the mild and selective decarboxylation of substituted cinnamic acids into hydroxystyrenes. Preparative-scale synthesis of vinyl guaiacol and utilization of renewable feedstock (ferulic acid extracted from maize bran) have been demonstrated to enhance the practical utility of the process.

    Two-symbol FPGA architecture for fast arithmetic encoding in JPEG 2000

    No full text
    JPEG 2000 is one of the most popular image compression standards, offering significant performance advantages over previous image standards. The high computational complexity of the JPEG 2000 algorithms makes it necessary to employ methods that overcome the bottlenecks of the system, and hence an efficient solution is imperative. One such crucial algorithm in JPEG 2000 is arithmetic coding, which is completely based on bit-level operations. In this paper, an efficient hardware implementation of arithmetic coding is proposed, which uses efficient pipelining and parallel processing for intermediate blocks. The idea is to provide a two-symbol coding engine that is efficient in terms of performance, memory and hardware. This architecture is implemented in the Verilog hardware description language and synthesized on an Altera field programmable gate array. The only memory unit used in this design is a FIFO (first in, first out) of 256 bits to store the CXD pairs at the input, which is negligible compared to existing arithmetic coding hardware designs. The simulation and synthesis results show that the operating frequency of the proposed architecture is greater than 100 MHz and that it achieves a throughput of 212 Msymbols/sec, which is double the throughput of the conventional one-symbol implementation and at least a 50% throughput increase compared to existing two-symbol architectures.
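
    The underlying operation being accelerated is binary arithmetic coding: each symbol narrows a probability interval, and the two-symbol engine performs two such narrowing steps per clock. The exact-arithmetic Python toy below illustrates only that idea; the actual design is the JPEG 2000 MQ-coder with context modelling, renormalisation and byte-out logic implemented in Verilog, none of which is shown here.

    from fractions import Fraction

    def encode_pairs(bits, p0=Fraction(2, 3)):
        """Narrow [low, low + width) by two symbols per iteration (static probability p0 of '0')."""
        low, width = Fraction(0), Fraction(1)
        for i in range(0, len(bits), 2):
            for b in bits[i:i + 2]:            # process a pair of symbols together
                if b == 0:
                    width *= p0                # keep the lower sub-interval
                else:
                    low += width * p0          # skip past the '0' sub-interval
                    width *= 1 - p0
            # a hardware coder would renormalise and emit bytes at this point
        return low, width

    low, width = encode_pairs([0, 1, 1, 0, 0, 0, 1, 0])
    print(float(low), float(low + width))      # any value in this interval identifies the input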

    Weather-Based Neural Network, Stepwise Linear and Sparse Regression Approach for Rabi Sorghum Yield Forecasting of Karnataka, India

    No full text
    Sorghum is an important dual-purpose crop of India, grown for food and fodder. Prevailing weather conditions during the crop growth period determine the yield of sorghum. Hence, crop yield forecasting models based on weather parameters are an appropriate option for policymakers and researchers to develop sustainable cropping strategies. In the present study, six multivariate weather-based models, viz. least absolute shrinkage and selection operator (LASSO), elastic net (ENET), principal component analysis (PCA) in combination with stepwise multiple linear regression (SMLR), artificial neural network (ANN) alone and in combination with PCA, and ridge regression, are examined by fixing 90% of the data for calibration and the remaining dataset for validation to forecast rabi sorghum yield for different districts of Karnataka. The R2 and root mean square error (RMSE) during calibration ranged from 0.42 to 0.98 and from 30.48 to 304.17 kg ha−1, respectively, without actual evapotranspiration (AET), whereas these evaluation parameters varied from 0.38 to 0.99 and from 19.84 to 308.79 kg ha−1, respectively, with AET included. During validation, the RMSE and nRMSE (normalized root mean square error) varied from 88.99 to 1265.03 kg ha−1 and from 4.49 to 96.84%, respectively, without AET; with AET included as one of the weather variables, the RMSE and nRMSE were 63.48 to 1172.01 kg ha−1 and 4.16 to 92.56%, respectively. The performance of the six multivariate models revealed that LASSO was the best model, followed by ENET, compared to the PCA_SMLR, ANN, PCA_ANN and ridge regression models, because of reduced overfitting through penalisation of the regression coefficients. Thus, it can be concluded that the LASSO and ENET weather-based models can be effectively utilized for district-level forecasting of sorghum yield.
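
    Since the preferred models are standard penalised regressions, a compact sketch of the modelling and evaluation loop may be useful. The Python example below uses scikit-learn's LassoCV and ElasticNetCV on synthetic placeholder data with the study's 90/10 calibration-validation split and its RMSE/nRMSE metrics; the features, coefficients and noise level are assumptions, not the paper's weather indices or yields.

    import numpy as np
    from sklearn.linear_model import LassoCV, ElasticNetCV
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    X = rng.standard_normal((120, 15))                         # stand-in weekly weather indices
    y = 1500 + X[:, :4] @ np.array([120, -80, 60, 40]) + rng.normal(0, 50, 120)  # yield, kg/ha

    # 90% of the data for calibration, the remainder for validation (as in the study)
    X_cal, X_val, y_cal, y_val = train_test_split(X, y, train_size=0.9, random_state=1)

    for name, model in [("LASSO", LassoCV(cv=5)), ("ENET", ElasticNetCV(cv=5))]:
        model.fit(X_cal, y_cal)
        pred = model.predict(X_val)
        rmse = np.sqrt(np.mean((y_val - pred) ** 2))
        nrmse = 100 * rmse / y_val.mean()                      # normalised RMSE, %
        print(f"{name}: RMSE = {rmse:.1f} kg/ha, nRMSE = {nrmse:.1f}%")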

    Monitoring of Harmful Algal Bloom (HAB) of Noctiluca scintillans (Macartney) along the Gulf of Mannar, India using in-situ and satellite observations and its impact on wild and maricultured finfishes

    No full text
    In the Gulf of Mannar, Noctiluca scintillans blooms were observed three times: in September 2019, in September and October 2020, and in October 2021. The effect of the bloom periods on ichthyo-diversity was determined and measured. Noctiluca cell density varied slightly from year to year, ranging from 1.8433 × 10³ cells/L to 1.3824 × 10⁶ cells/L. High ammonia levels and low dissolved oxygen levels were noted in surface and sea-bottom waters. During the bloom period, a significant increase in chlorophyll concentration was found, and remote sensing images derived from MODIS-Aqua 4 km data showed that the chlorophyll concentration in the Gulf of Mannar was extremely high. Acute hypoxia caused the death of wild fish near coral reefs as well as of fish reared in sea cages. The decay of the bloom resulted in significant ammonia production, a dramatic drop in the dissolved oxygen in the water, and ultimately stress, shock, and mass mortality of fishes.

    Polyfunctional CD4 T-cells correlating with neutralising antibody is a hallmark of COVISHIELD™ and COVAXIN® induced immunity in COVID-19 exposed Indians

    No full text
    Detailed characterisation of the immune responses induced by the COVID-19 vaccines rolled out in India, COVISHIELD™ (CS) and COVAXIN® (CO), in a pre-exposed population has only recently begun to emerge. We addressed this issue in subjects who received their primary series of vaccination between November 2021 and January 2022. Both vaccines are capable of strongly boosting Wuhan Spike-specific neutralising antibody, polyfunctional Th1 cytokine-producing CD4+ T-cells and single IFN-γ+ CD8+ T-cells. Consistent with inherent differences in vaccine platform, the vector-based CS vaccine induced immunity of greater magnitude and breadth, targeting the Delta and Omicron variants, compared to the whole-virion inactivated vaccine CO, with CS vaccinees showing persistent CD8+ T-cell responses until 3 months post primary vaccination. This study provides detailed evidence on the magnitude and quality of CS and CO vaccine-induced responses in subjects with pre-existing SARS-CoV-2 immunity in India, thereby mitigating vaccine hesitancy arguments in such a population, which remains a global health challenge.